Scheduling Dynamically Spawned Processes in MPI-2

Authors

  • Márcia C. Cera
  • Guilherme P. Pezzi
  • Maurício L. Pilla
  • Nicolas Maillard
  • Philippe Olivier Alexandre Navaux
Abstract

The Message Passing Interface (MPI) is one of the best-known parallel programming libraries. Although the MPI-1.2 standard only deals with a fixed number of processes, determined at the start of the parallel execution, the more recently implemented MPI-2 standard provides primitives to spawn processes during execution and to let them communicate with one another. However, the MPI norm does not include any way to schedule these processes. This paper presents a scheduler module, implemented with MPI-2, that determines on-line (i.e., during execution) on which processor a newly spawned process should run, and with which priority. The scheduling is computed under the hypothesis that the MPI-2 program follows a Divide-and-Conquer model, for which well-known scheduling algorithms can be used. A detailed presentation of the scheduler's implementation, as well as an experimental validation, is provided. The experiments show a clear improvement in load balance.


Related Articles

Improving the Dynamic Creation of Processes in MPI-2

The MPI-2 standard has been implemented for a few years in most MPI distributions. Like MPI-1.2, it leaves it to the user to decide when and where processes must be run. Yet the dynamic creation of processes enabled by MPI-2 makes it harder to handle their scheduling manually. This paper presents a scheduler module, implemented with MPI-2, that determines on-line (...


Reconfiguration of MPI Processes at Runtime in a Numerical PDE Solver

The simulation of complex phenomena, described by partial differential equations, requires adaptive numerical methods and parallel computers. In adaptive methods the computational grid is automatically refined or coarsened to meet accuracy requirements in the solution. This leads to a dynamic change in workload. In a parallel computing context, the data must be redistributed between the process...


MPC: A Unified Parallel Runtime for Clusters of NUMA Machines

Over the last decade, the Message Passing Interface (MPI) has become a very successful parallel programming environment for distributed-memory architectures such as clusters. However, the architecture of cluster nodes is currently evolving from small symmetric shared-memory multiprocessors towards massively multicore, Non-Uniform Memory Access (NUMA) hardware. Although regular MPI implementations ar...


Modular MPI and PVM Components

In the Ensemble methodology, message-passing applications are built from separate modular components. Processes spawned from modular components specify open communication interfaces (point-to-point or collective), which are bound at run time according to application composition directives. We give an overview of the concepts and tools. We present and compare the design of modular PVM and MPI compon...


Programming for Malleability with Hybrid MPI-2 and OpenMP – Experiences with a Simulation Program for Global Water Prognosis

This paper reports on our experiences in parallelizing WaterGAP, an originally sequential C++ program for global assessment and prognosis of water availability. The parallel program runs on a heterogeneous SMP cluster and combines different parallel programming paradigms: First, at its outer level, it uses master/slave communication implemented with MPI. Second, within the slave processes, mult...




Publication date: 2006